In [36]:
using ApproxFun, Plots, ComplexPhasePortrait, SingularIntegralEquations,
        SpecialFunctions

using SingularIntegralEquations.HypergeometricFunctions
gr();

M3M6: Methods of Mathematical Physics

$$ \def\dashint{{\int\!\!\!\!\!\!-\,}} \def\infdashint{\dashint_{\!\!\!-\infty}^{\,\infty}} \def\D{\,{\rm d}} \def\E{{\rm e}} \def\dx{\D x} \def\dt{\D t} \def\dz{\D z} \def\ds{\D s} \def\C{{\mathbb C}} \def\R{{\mathbb R}} \def\H{{\mathbb H}} \def\CC{{\cal C}} \def\HH{{\cal H}} \def\FF{{\cal F}} \def\I{{\rm i}} \def\Ei{{\rm Ei}\,} \def\qqqquad{\qquad\qquad} \def\qqand{\qquad\hbox{and}\qquad} \def\qqfor{\qquad\hbox{for}\qquad} \def\qqwhere{\qquad\hbox{where}\qquad} \def\Res_#1{\underset{#1}{\rm Res}}\, \def\sech{{\rm sech}\,} \def\acos{\,{\rm acos}\,} \def\erfc{\,{\rm erfc}\,} \def\vc#1{{\mathbf #1}} \def\ip<#1,#2>{\left\langle#1,#2\right\rangle} \def\br[#1]{\left[#1\right]} \def\norm#1{\left\|#1\right\|} \def\half{{1 \over 2}} \def\fL{f_{\rm L}} \def\fR{f_{\rm R}} \def\HF{{}_2F_1} \def\questionequals{= \!\!\!\!\!\!{\scriptstyle ? \atop }\,\,\,} $$

Dr Sheehan Olver
s.olver@imperial.ac.uk

Office Hours: 3-4pm Mondays, 11am-12pm Thursdays, Huxley 6M40
Website: https://github.com/dlfivefifty/M3M6LectureNotes

Chapter 6: Special functions

A special function is a function that can't be expressed in closed form in terms of classical functions like $\cos$ and $\sin$. We've seen a few special functions so far (evaluated numerically after the following list): \begin{align*} \Ei z &= \int_{-\infty}^z {\E^\zeta \over \zeta} \D \zeta \\ \erfc z &= {2 \over \sqrt \pi} \int_z^\infty \E^{-\zeta^2} \D \zeta \\ \Gamma(\alpha, z) &= \int_z^\infty \zeta^{\alpha-1} \E^{-\zeta} \D\zeta. \end{align*} But we've also seen special functions in the form of orthogonal polynomials:

  1. $P_n^{(a,b)}(x)$ are orthogonal w.r.t. $(1-x)^a(1+x)^b$
  2. $L_n^{(a)}(x)$ are orthogonal w.r.t. $x^a \E^{-x}$
  3. $H_n(x)$ are orthogonal w.r.t. $\E^{-x^2}$
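These can be evaluated numerically. Here is a quick sketch using SpecialFunctions (expinti and the two-argument gamma for the incomplete gamma function assume a sufficiently recent version of the package):

# Evaluate the three special functions above at a sample point.
expinti(1.0)      # Ei(1)    ≈ 1.8951
erfc(1.0)         # erfc(1)  ≈ 0.1573
gamma(0.5, 1.0)   # Γ(1/2,1) = √π erfc(1) ≈ 0.2788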

Lecture 26: Analyticity of solutions of ordinary differential equations

Most special functions solve simple ODEs whose coefficients are very low degree rational functions. For example, our three special functions satisfy second-order ODEs, as do the Laguerre and Hermite polynomials:

  1. $u(z) = \E^{-z} \Ei z$ satisfies \begin{align*} {\D u \over \dz} + u &= {1 \over z}\qquad\Rightarrow \\ z {\D^2 u \over \dz^2} + (z+1) {\D u \over \dz} + u &= 0 \end{align*}
  2. $u(z) = {\sqrt \pi \over 2} \E^{z^2} \erfc z$ satisfies (verified numerically after this list) \begin{align*} {\D u \over \dz} - 2 z u &= -1 \qquad\Rightarrow \\ {\D^2 u \over \dz^2} -2z {\D u \over \dz} -2 u &= 0 \end{align*}
  3. $u(z) = \E^{z} \Gamma(\alpha, z)$ satisfies \begin{align*} {\D u \over \dz} - u &= -z^{\alpha-1} \qquad\Rightarrow \\ z {\D^2 u \over \dz^2} + (1- \alpha -z) {\D u \over \dz} + (\alpha-1)u &= 0 \end{align*}

  4. Laguerre satisfies $$ x {\D^2 L_n^{(a)} \over \dx^2} + (a+1-x) {\D L_n^{(a)} \over \dx} + n L_n^{(a)} = 0 $$

  5. Hermite satisfies $$ {\D^2 H_n \over \dx^2} -2x{\D H_n \over \dx} + 2n H_n = 0 $$
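As a sanity check of the first-order ODE in example 2, note that SpecialFunctions provides the scaled complementary error function erfcx(x) $= \E^{x^2} \erfc x$, so a centred finite difference should confirm $u' - 2xu = -1$ (a minimal sketch; the sample point and step size are arbitrary):

# u(x) = (√π/2) erfcx(x) should satisfy u' - 2x u = -1.
u = x -> sqrt(π)/2 * erfcx(x)
x, h = 0.3, 1e-5
du = (u(x+h) - u(x-h))/(2h)   # centred finite-difference approximation to u′
du - 2x*u(x)                  # ≈ -1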

A natural question arises: what is the relationship between the singularities of the variable coefficients and the singularities of the solutions?

  1. General properties of ODEs in the complex plane
    • Solving an ODE on a contour
    • Radius of convergence
    • Analytic continuation

ODEs on contours

Consider the solution of the first-order ODE $$ {\D u\over \dz} = a(z) u\qqand u(z_0) = c, $$ which we can write as $$ u(z) = c \E^{\int_{z_0}^z a(\zeta) \D \zeta}. $$ That is, we can think of the solution as living on a contour: the contour of integration of the integral.

Alternatively, we can think of the ODE itself as living on a contour $\gamma : (a,b) \rightarrow \C$. In the first-order case, the change of variables $v(t) = u(\gamma(t))$ reduces the ODE to $$ {\D v \over \dt} = \gamma'(t) u'(\gamma(t)) = \gamma'(t) a(\gamma(t)) u(\gamma(t)) = \gamma'(t) a(\gamma(t)) v. $$ Thus, provided we choose the contour to avoid the singularities of $a(z)$, we can define the solution; but the value of $u(z)$ can depend on the choice of contour.

Normally, the contour is taken as a straight line, so that poles in $a(z)$ can induce branch cuts in $u(z)$.

Example Consider $$ {\D u\over \dz} = u \qqand u(0) = 1 $$ with solution $u(z) = \E^z$. Consider a contour like $\gamma(t) = (1+\I )t$. Then $v(t) = u(\gamma(t)) = \E^{(1+\I )t}$ satisfies the ODE $$ {\D v \over \dt} = (1+\I ) v \qqand v(0) = 1 $$
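As a numerical illustration, here is a minimal sketch in base Julia (the step size and endpoint are arbitrary) integrating this ODE in $t$ with classical RK4 and comparing with $u(\gamma(1)) = \E^{1+\I}$:

# Integrate dv/dt = (1+i)v, v(0) = 1, by RK4 and compare with e^{1+i}.
f = (t, v) -> (1+im)*v
v, t, h = complex(1.0), 0.0, 0.001
while t < 1 - h/2
    k1 = f(t, v)
    k2 = f(t + h/2, v + h/2*k1)
    k3 = f(t + h/2, v + h/2*k2)
    k4 = f(t + h, v + h*k3)
    v += h/6*(k1 + 2k2 + 2k3 + k4)
    t += h
end
abs(v - exp(1+im))   # ≈ 0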

Example Now consider an ODE with a pole: $$ {\D u\over \dz} = {k u \over z} \qqand u(1) = 1 $$ with solution $u(z) = z^k$. Consider two different choices of contours: $\gamma_1(t) = \E^{\I t}$ and $ \gamma_2(t) = \E^{-\I t}$ for $0 \leq t \leq 2\pi$. For $v_1(t) = u(\gamma_1(t))$ we have the ODE $$ {\D v_1\over \dt} = \I k v_1 \qqand v_1(0) = 1 $$ with solution $v_1(t) = \E^{\I k t}$ (and similarly $v_2(t) = \E^{- \I k t}$). Hence we have \begin{align*} u(1) &= u(\E^{2 \I \pi}) \questionequals v_1(2\pi) = \E^{2 \pi \I k} \\ u(1) &= u(\E^{-2 \I \pi}) \questionequals v_2(2\pi) = \E^{-2 \pi \I k} \end{align*} When $k$ is not an integer these disagree with the starting value $u(1) = 1$ (and, unless $2k$ is an integer, with each other): the continuation of $u$ around the pole is path-dependent.
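This is easy to see numerically; a quick check with $k = 1/4$:

# For k = 1/4 the anticlockwise and clockwise continuations back to z = 1 differ:
k = 0.25
exp(2π*im*k), exp(-2π*im*k)   # (i, -i): two different branches of z^k at z = 1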

Radius of convergence of ODEs

This non-uniqueness means we should think of the solution of an ODE as living along a contour. In what sense, then, is $u(z)$ analytic? We can deduce that the radius of convergence of the solution about $z_0$ is dictated by the radius of convergence of $a(z)$ about $z_0$, that is, by the distance to the closest singularity.

Theorem Suppose $a(z)$ is analytic in a disk of radius $R$ centred at $z_0$. Then the solution $u(z)$ is also analytic in a disk of radius $R$ centred at $z_0$.

Sketch of proof We will show this using Taylor series (using operator notation). Taking $z_0 = 0$, represent $$ u(z) = u_0 + u_1 z+ u_2 z^2 + \cdots = (1,z,z^2,\ldots) \begin{pmatrix} u_0\\u_1\\u_2\\\vdots \end{pmatrix} $$ The derivative operator has a very simple form: $$ u'(z) = (1,z,z^2,\ldots) \begin{pmatrix} 0 & 1 \\ && 2 \\ &&&3 \\ &&&&\ddots \end{pmatrix} \begin{pmatrix} u_0\\u_1\\u_2 \\ \vdots \end{pmatrix} $$ On the other hand, multiplication by $z$ has the following operator form: $$ z u(z) = (1,z,z^2,\ldots) \begin{pmatrix} 0 \\ 1 \\ & 1 \\ &&1 \\ &&&\ddots \end{pmatrix} \begin{pmatrix} u_0\\u_1\\u_2 \\ \vdots \end{pmatrix} $$ Each time we multiply by $z$, the coefficients get shifted down. Thus multiplication by $$ a(z) = a_0 + a_1 z+ a_2 z^2 + \cdots $$ has the form $$ a(z) u(z) = (1,z,z^2,\ldots) \begin{pmatrix} a_0 \\ a_1 & a_0 \\ a_2 & a_1 & a_0 \\ a_3 & a_2 & a_1 & a_0 \\ \vdots &\ddots&\ddots&\ddots&\ddots \end{pmatrix} \begin{pmatrix} u_0\\u_1\\u_2 \\ \vdots \end{pmatrix} $$ Thus the ODE $u'(z) - a(z) u(z)= 0$ with $u(0) = c$ becomes: $$ \begin{pmatrix} 1 \\ -a_0 & 1 \\ -a_1 & -a_0 & 2 \\ -a_2 & -a_1 & -a_0 & 3 \\ -a_3 & -a_2 & -a_1 & -a_0 & 4 \\ \vdots &\ddots&\ddots&\ddots&\ddots & \ddots \end{pmatrix} \begin{pmatrix} u_0\\u_1\\u_2 \\ \vdots \end{pmatrix} = \begin{pmatrix} c \\ 0 \\\vdots \end{pmatrix} $$ This is solvable via forward substitution.
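To make this concrete, here is a minimal sketch of the truncated system in base Julia, taking $c = 1$ and the illustrative choice $a(z) \equiv 1$, so that $u(z) = \E^z$ and the computed coefficients should be $1/k!$:

# Truncation of the linear system for u' = a(z)u, u(0) = 1, with a(z) ≡ 1.
n = 10
a = [1.0; zeros(n-1)]         # Taylor coefficients of a(z) ≡ 1
A = zeros(n, n)
A[1,1] = 1.0                  # initial-condition row: u_0 = 1
for k = 1:n-1
    A[k+1, k+1] = k           # derivative operator contributes k u_k
    for j = 1:k
        A[k+1, j] = -a[k-j+1] # -a_{k-j} from multiplication by a(z)
    end
end
u = A \ [1.0; zeros(n-1)]     # lower triangular: forward substitution
u ≈ 1 ./ factorial.(0:n-1)    # true: Taylor coefficients of e^z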

Assume that the radius of convergence of $a$ is $R$, that is, for every $r < R$ we have $|a_k| \leq {C \over r^k}$ for some constant $C = C(r)$. The worst case for the growth of $u_k$ occurs when every $a_k$ is positive and as large as possible; taking $c = 1$, we therefore have $$ \left| \begin{pmatrix} u_0\\u_1\\u_2 \\ \vdots \end{pmatrix} \right| \leq \begin{pmatrix} 1 \\ -C & 1 \\ -C r^{-1} & -C & 2 \\ -C r^{-2} & -C r^{-1} & -C & 3 \\ -C r^{-3} & -C r^{-2} & -C r^{-1} & -C & 4 \\ \vdots &\ddots&\ddots&\ddots&\ddots & \ddots \end{pmatrix}^{-1}\begin{pmatrix} 1 \\ 0 \\\vdots \end{pmatrix} $$ That is, we can bound $|u_k| \leq w_k$ where $w_k$ solves $$ \begin{pmatrix} 1 \\ -C & 1 \\ -C r^{-1} & -C & 2 \\ -C r^{-2} & -C r^{-1} & -C & 3 \\ -C r^{-3} & -C r^{-2} & -C r^{-1} & -C & 4 \\ \vdots &\ddots&\ddots&\ddots&\ddots & \ddots \end{pmatrix}\begin{pmatrix}w_0 \\ w_1 \\\vdots \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \\\vdots \end{pmatrix} $$ This is precisely the coefficient system for the ODE with coefficient $$ \tilde a(z) = C\sum_{k=0}^\infty r^{-k} z^k = {C r \over r-z} $$ This motivates multiplying the equation by $r-z$, or in coefficient space, by: $$ \begin{pmatrix} 1 \\ -1 & r \\ &-1 & r \\ &&\ddots & \ddots \end{pmatrix} $$ which simplifies things: $$ \begin{pmatrix} 1 \\ -1-Cr & r \\ & -1-Cr & 2r \\ & & -2-Cr & 3r \\ & & & -3-Cr & 4r \\ &&&&\ddots & \ddots \end{pmatrix}\begin{pmatrix}w_0 \\ w_1 \\\vdots \end{pmatrix} = \begin{pmatrix} 1 \\ -1 \\0 \\\vdots \end{pmatrix} $$ Therefore, for $k \geq 2$ we have $$ w_k = r^{-1}\left(1 + {Cr - 1 \over k}\right) w_{k-1} \leq r^{-1}\left(1 + {Cr \over k}\right) w_{k-1} \leq r^{-2}\left(1 + {Cr \over k}\right)\left(1 + {Cr \over k-1}\right) w_{k-2} \leq \cdots \leq r^{1-k}\left(1 + {Cr \over k}\right) \cdots \left(1 + {Cr \over 2}\right) w_1 $$
The product grows only algebraically: since $\sum_{j=2}^k 1/j = O(\log k)$, we have $\prod_{j=2}^k (1 + Cr/j) = O(k^{Cr})$, hence $w_k = O(k^{Cr} r^{-k})$. As this holds for every $r < R$, the radius of convergence of $u$ is at least $R$.

⬛️
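As a numerical sanity check that the product grows only algebraically, like $k^{Cr}$, the ratio below settles down as $k$ grows ($C$ and $r$ are arbitrary illustrative values):

# ∏_{j=2}^k (1 + Cr/j) = O(k^{Cr}): the normalised product approaches a constant.
C, r = 2.0, 0.5
p = k -> prod(1 + C*r/j for j = 2:k)
p(100)/100^(C*r), p(10_000)/10_000^(C*r)   # roughly equal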

Remark This proof can be adapted to the vector-valued case, which gives the equivalent result for $$ u''(z) + a(z) u'(z) + b(z) u(z) = 0: $$ the radius of convergence of $u$ is at least the smaller of the radii of convergence of $a$ and $b$.
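For instance, Airy's equation $u'' = z u$ has entire coefficients, so its solutions are entire. A minimal sketch checking this via the Taylor recurrence $(k+2)(k+1)u_{k+2} = u_{k-1}$, seeded by airyai and airyaiprime from SpecialFunctions (the truncation length and evaluation point are arbitrary):

# Taylor coefficients of Ai about 0 from u'' = z u, then compare with airyai.
n = 40
u = zeros(n)
u[1], u[2] = airyai(0), airyaiprime(0)  # u_0 = Ai(0), u_1 = Ai'(0); u_2 = 0
for k = 1:n-3
    u[k+3] = u[k]/((k+2)*(k+1))         # u_{k+2} = u_{k-1}/((k+2)(k+1))
end
z = 1.0
sum(u[j]*z^(j-1) for j = 1:n) - airyai(z)   # ≈ 0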

Analytic continuation

We know $u$ is analytic around $z_0$ with a non-zero radius of convergence. Given a curve, we can then re-expand around another point inside the first disk of convergence, to get analyticity in another disk:


In [25]:
γ = Arc(0.,1., (0,π))

p = plot(γ; label="contour")
scatter!([0.],[0.]; label="singularity of a")
r = 0.5
for k = linspace(0.,π,10)
    plot!(Circle(exp(im*k),r); color=:green, label="")   # suppress per-circle legend entries
end
p


Out[25]:
[Plot: the contour γ along the upper unit semicircle with overlapping green disks of analyticity centred on it; legend: "contour", "singularity of a".]

In this sense, provided $a$ is analytic in a neighbourhood of $\gamma$, $u$ can be analytically continued along $\gamma$. But note that as soon as this analytic continuation wraps back onto itself, there is no guarantee that it matches the original values, as we saw with $u(z) = z^k$.